Alphabet Inc. and Google CEO Sundar Pichai attends the inauguration of a Google Artificial Intelligence (AI) hub in Paris on February 15, 2024. (Photo by ALAIN JOCARD / AFP)

Google CEO Sundar Pichai criticizes errors in Gemini AI app as ‘completely unacceptable’

Google CEO Sundar Pichai criticized the “completely unacceptable” mistakes made by the Gemini AI app, which included displaying images of ethnically diverse World War II Nazi troops. As a result, Google paused the app’s ability to generate images of people. The controversy arose shortly after Google rebranded its ChatGPT-style AI as “Gemini”, giving it a prominent place across its products as it competes with OpenAI and Microsoft.

Social media users mocked and criticized Google for historically inaccurate images created by Gemini, such as 19th-century US senators that were ethnically diverse and included women.

“I would like to address recent issues related to problematic text and image responses on the Gemini app,” Pichai wrote in a letter to staff, published by news site Semafor.

“I know that some of its responses have offended our users and shown bias. To be clear, that’s completely unacceptable and we got it wrong.”

A Google spokesperson confirmed to AFP that the letter was genuine.

Pichai said Google teams are working “around the clock” to fix these issues, but did not say when the image creation feature would be available again.

“No artificial intelligence is perfect, especially at this emerging stage of the industry’s development, but we know the bar is high for us and we will keep at it for however long it takes,” he wrote.

Tech companies see generative AI models as the next big step in computing and are racing to infuse them into everything from Internet search and customer support automation to music and art creation.

But AI models, not just Google’s, have long been criticized for perpetuating racial and gender bias in their results.

Google said last week that Gemini’s problematic responses stemmed from the company’s efforts to remove such biases.

Gemini was tuned to show a range of people in its images, but it failed to account for prompts where that range clearly should not apply, and it was also overly cautious with some otherwise innocuous requests, Google’s Prabhakar Raghavan wrote in a blog post.

“These two things caused the model to overcompensate in some cases and be too conservative in others, resulting in images that were awkward and wrong,” he said.

Since the explosive success of ChatGPT, concerns about AI have multiplied.

Experts and governments have warned that AI also carries the risk of major economic upheaval, particularly job displacement, and industrial-scale disinformation that could manipulate elections and fuel violence.
